higher education
Why the Future of College Could Look Like OnlyFans
Universities have become generic, one professor and former dean argues. In the A.I. era, students may demand something they can't get elsewhere.

Last week, I asked whether, as a forty-six-year-old father of two, I should keep contributing to my children's college funds, or if perhaps some combination of anti-establishment fervor, A.I., and a shifting economy could save me some money. I don't have a particularly good answer yet, at least not one good enough to inspire the purchase of a midlife-crisis car, my son's and daughter's futures be damned. But, after wrestling with that query in Part 1 of what will be a series of articles, I think there may be a better one to ask. The question is not, I think, "How will A.I. change higher education?" I wanted to talk with someone who stood outside the polite consensus that holds that college as we know it will survive, if only because, as I wrote last week, humans will always want to differentiate their children from other people's children.
- Health & Medicine (0.94)
- Education > Educational Setting > Higher Education (0.92)
Will A.I. Make College Obsolete?
More and more people may decide that its stamp of approval isn't worth the cost.

A few weeks ago, while I was dealing with taxes, it occurred to me that the money my wife and I were putting away in a college fund for our children might be better used somewhere else. This wasn't a novel musing, but it felt particularly pressing as I watched my account balance go down, a portion of its resources funnelled into something that can't be touched for at least the next nine years. When my nine-year-old daughter graduates from high school, in 2035, I asked myself, will the landscape of higher education look the way that it does now?
- Education > Educational Setting > Higher Education (1.00)
- Education > Educational Setting > Online (0.94)
The greatest risk of AI in higher education isn't cheating – it's the erosion of learning itself
Public debate about artificial intelligence in higher education has largely orbited a familiar worry: cheating. Will students use chatbots to write essays? Should universities ban the tech? But focusing so much on cheating misses the larger transformation already underway, one that extends far beyond student misconduct and even the classroom. Universities are adopting AI across many areas of institutional life.
The Accidental Winners of the War on Higher Ed
Go to a small liberal-arts college if you can.

In the waning heat of last summer, freshly back in my office at a major research university, I found myself considering the higher-education hellscape that had lately descended upon the nation. I'd spent months reporting on the Trump administration's attacks on universities, speaking with dozens of administrators, faculty, and students about the billions of dollars in cuts to public funding for research and the resulting collapse of "college life." Initially, I surveyed the situation from the safe distance of a journalist who happens to also be a career professor and university administrator. I saw myself as an envoy between America's college campuses and its citizens, telling the stories of the people whose lives had been shattered by these transformations. By the summer, though, that safe distance had collapsed back on me.
- Law (1.00)
- Education > Educational Setting > Higher Education (1.00)
- Government > Regional Government > North America Government > United States Government (0.90)
AdvisingWise: Supporting Academic Advising in Higher Education Settings Through a Human-in-the-Loop Multi-Agent Framework
Jiang, Wendan, Wang, Shiyuan, Eltigani, Hiba, Haroon, Rukhshan, Faisal, Abdullah Bin, Dogar, Fahad
Academic advising is critical to student success in higher education, yet high student-to-advisor ratios limit advisors' capacity to provide timely support, particularly during peak periods. Recent advances in Large Language Models (LLMs) present opportunities to enhance the advising process. We present AdvisingWise, a multi-agent system that automates time-consuming tasks, such as information retrieval and response drafting, while preserving human oversight. AdvisingWise leverages authoritative institutional resources and adaptively prompts students about their academic backgrounds to generate reliable, personalized responses. All system responses undergo human advisor validation before delivery to students. We evaluate AdvisingWise through a mixed-methods approach: (1) expert evaluation on responses of 20 sample queries, (2) LLM-as-a-judge evaluation of the information retrieval strategy, and (3) a user study with 8 academic advisors to assess the system's practical utility. Our evaluation shows that AdvisingWise produces accurate, personalized responses. Advisors reported increasingly positive perceptions after using AdvisingWise, as their initial concerns about reliability and personalization diminished. We conclude by discussing the implications of human-AI synergy on the practice of academic advising.
- Research Report > New Finding (1.00)
- Instructional Material (1.00)
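The retrieve-draft-validate loop the AdvisingWise abstract describes can be sketched in a few lines. Everything here is an illustrative assumption, not the paper's implementation: the function names are invented, the keyword matching stands in for the system's LLM-driven retrieval, and the string template stands in for LLM drafting. Only the shape of the pipeline — automated retrieval and drafting with a mandatory human-advisor gate before anything reaches a student — is taken from the abstract.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """A drafted advising response awaiting human validation."""
    query: str
    answer: str
    approved: bool = False

def retrieve(query: str, knowledge_base: list[str]) -> list[str]:
    # Toy keyword retrieval over institutional documents; AdvisingWise
    # uses an LLM-based retrieval strategy over authoritative resources.
    terms = query.lower().split()
    return [doc for doc in knowledge_base if any(t in doc.lower() for t in terms)]

def draft_response(query: str, sources: list[str]) -> Draft:
    # Drafting stage; in the real system an LLM composes this answer.
    context = " ".join(sources) if sources else "No institutional source found."
    return Draft(query=query, answer=f"Per institutional policy: {context}")

def advisor_validate(draft: Draft, approve: bool) -> Draft:
    # Human-in-the-loop gate: no response is delivered unapproved.
    draft.approved = approve
    return draft
```

The point of the sketch is the last function: automation covers the time-consuming steps, but delivery is conditioned on an explicit advisor decision.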
Bridging the Skills Gap: A Course Model for Modern Generative AI Education
Bardach, Anya, Murrah, Hamilton
Research on how the popularization of generative Artificial Intelligence (AI) tools impacts learning environments has led to hesitancy among educators to teach these tools in classrooms, creating two observed disconnects. Generative AI competency is increasingly valued in industry but not in higher education, and students are experimenting with generative AI without formal guidance. The authors argue students across fields must be taught to responsibly and expertly harness the potential of AI tools to ensure job market readiness and positive outcomes. Computer Science trajectories are particularly impacted, and while consistently top-ranked U.S. Computer Science departments teach the mechanisms and frameworks underlying AI, few appear to offer courses on applications for existing generative AI tools. A course was developed at a private research university to teach undergraduate and graduate Computer Science students applications for generative AI tools in software development. Two mixed-method surveys indicated students overwhelmingly found the course valuable and effective. Co-authored by the instructor and one of the graduate students, this paper explores the context, implementation, and impact of the course through data analysis and reflections from both perspectives. It additionally offers recommendations for replication in and beyond Computer Science departments. This is the extended version of this paper to include technical appendices.
- Instructional Material > Course Syllabus & Notes (1.00)
- Questionnaire & Opinion Survey (0.93)
- Research Report (0.82)
A Multi-level Analysis of Factors Associated with Student Performance: A Machine Learning Approach to the SAEB Microdata
Tertulino, Rodrigo, Almeida, Ricardo
Identifying the determinants of academic success in basic education represents a central challenge for educational research and policymaking, particularly in a country with Brazil's vast dimensions and socioeconomic heterogeneity (Issah et al. 2023). A systemic approach is crucial, as student performance is influenced by a complex interplay of factors spanning individual, academic, socioeconomic, and institutional domains (Barragán Moreno and Guzmán Rincón 2025). The System of Assessment of Basic Education (SAEB), conducted by the National Institute for Educational Studies and Research Anísio Teixeira (INEP) (INEP 2025), provides a rich, multi-level dataset uniquely suited for such an analysis (Bonamino et al. 2010). The public availability of its anonymized microdata enables the research community to investigate the intricate relationships between student proficiency and a wide array of contextual factors, from socioeconomic backgrounds to school infrastructure and teacher profiles. Consequently, the SAEB microdata is an essential resource for data-driven research aimed at informing and evaluating educational policies in the country (Lundberg and Lee 2017b; Mazoni and Oliveira 2023). While traditional statistical methods are common, the Educational Data Mining (EDM) paradigm offers powerful tools for uncovering complex, non-linear patterns from such data (Romero and Ventura 2010). Furthermore, we demonstrate that by interpreting the model's classification results with XAI techniques, our method provides data-driven insights for educators and policymakers (Idrizi 2024). The primary objective of this research is thus to develop and evaluate a multi-level machine learning model to identify the key systemic factors associated with the academic performance of 9th-grade and high school students, using the SAEB microdata.
Building upon this perspective, the study shifts its analytical focus from purely individual student interventions toward addressing the systemic determinants that shape educational outcomes in Brazilian basic education.
- North America > United States (0.93)
- South America (0.67)
- Research Report > New Finding (1.00)
- Instructional Material (1.00)
- Education > Assessment & Standards > Student Performance (1.00)
- Education > Educational Setting > Higher Education (0.69)
- Education > Curriculum > Subject-Specific Education (0.67)
- Education > Educational Setting > K-12 Education > Secondary School (0.55)
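The "multi-level" part of the SAEB analysis — joining student-level rows to school-level attributes before modeling — can be illustrated with a deliberately crude sketch. The field names (`school_id`, `has_library`, `proficient`) are invented placeholders, and the rate-difference score below is a stand-in for the model-based importance scores (e.g. SHAP values, per Lundberg and Lee) that the study actually derives; only the join structure mirrors the paper's setup.

```python
def merge_levels(students, schools):
    """Join student-level rows with their school-level attributes,
    mirroring the multi-level structure of the SAEB microdata."""
    by_id = {s["school_id"]: s for s in schools}
    return [{**stu, **by_id[stu["school_id"]]} for stu in students]

def association(rows, feature, outcome="proficient"):
    """Outcome rate with a binary feature present minus the rate with it
    absent: a crude, model-free proxy for feature importance."""
    rate = lambda xs: sum(xs) / len(xs) if xs else 0.0
    present = [r[outcome] for r in rows if r[feature] == 1]
    absent = [r[outcome] for r in rows if r[feature] == 0]
    return rate(present) - rate(absent)
```

A real pipeline would replace `association` with a trained classifier plus an XAI layer, but the join step is where the individual, socioeconomic, and institutional levels the abstract lists come together into one feature table.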
AI-Driven Contribution Evaluation and Conflict Resolution: A Framework & Design for Group Workload Investigation
Slapek, Jakub, Seyedebrahimi, Mir, Jianhua, Yang
The equitable assessment of individual contribution in teams remains a persistent challenge, where conflict and disparity in workload can result in unfair performance evaluation, often requiring manual intervention - a costly and challenging process. We survey existing tool features and identify a gap in conflict resolution methods and AI integration. To address this, we propose a framework and implementation design for a novel AI-enhanced tool that assists in dispute investigation. The framework organises heterogeneous artefacts - submissions (code, text, media), communications (chat, email), coordination records (meeting logs, tasks), peer assessments, and contextual information - into three dimensions with nine benchmarks: Contribution, Interaction, and Role. Objective measures are normalised, aggregated per dimension, and paired with inequality measures (Gini index) to surface conflict markers. A Large Language Model (LLM) architecture performs validated and contextual analysis over these measures to generate interpretable and transparent advisory judgments. We argue for feasibility under current statutory and institutional policy, and outline practical analytics (sentiment, task fidelity, word/line count, etc.), bias safeguards, limitations, and practical challenges.
- South America > Uruguay > Maldonado > Maldonado (0.04)
- Europe > United Kingdom > England > Leicestershire > Leicester (0.04)
- Oceania > Australia > Western Australia (0.04)
- (5 more...)
- Law (1.00)
- Government (1.00)
- Education > Educational Setting (1.00)
- Education > Assessment & Standards (0.68)
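The inequality measure the framework pairs with each dimension is the standard Gini coefficient, which is straightforward to compute. This is a minimal sketch assuming per-member contribution scores arrive as plain non-negative numbers; how the framework normalises and aggregates those scores per dimension is described in the paper, not here.

```python
def gini(values):
    """Gini coefficient of non-negative contribution scores.

    0.0 means perfectly equal workload; values approaching 1.0 signal
    that one member accounts for almost all measured contribution,
    i.e. a potential conflict marker.
    """
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Mean-absolute-difference formulation using sorted ranks.
    cum = sum((2 * (i + 1) - n - 1) * x for i, x in enumerate(xs))
    return cum / (n * total)
```

For a four-person team, `gini([10, 10, 10, 10])` is `0.0`, while `gini([0, 0, 0, 100])` is `0.75` (the maximum for n = 4), so a simple threshold on this value can flag groups worth investigating.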
Academics and Generative AI: Empirical and Epistemic Indicators of Policy-Practice Voids
As generative AI diffuses through academia, policy-practice divergence becomes consequential, creating demand for auditable indicators of alignment. This study prototypes a ten-item, indirect-elicitation instrument embedded in a structured interpretive framework to surface voids between institutional rules and practitioner AI use. The framework extracts empirical and epistemic signals from academics, yielding three filtered indicators of such voids: (1) AI-integrated assessment capacity (proxy) - within a three-signal screen (AI skill, perceived teaching benefit, detection confidence), the share who would fully allow AI in exams; (2) sector-level necessity (proxy) - among high output control users who still credit AI with high contribution, the proportion who judge AI capable of challenging established disciplines; and (3) ontological stance - among respondents who judge AI different in kind from prior tools, report practice change, and pass a metacognition gate, the split between material and immaterial views as an ontological map aligning procurement claims with evidence classes.
- Asia > Japan > Honshū > Kantō > Tokyo Metropolis Prefecture > Tokyo (0.41)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.05)
- Asia > Japan > Honshū > Kantō > Ibaraki Prefecture > Tsukuba (0.04)
- (4 more...)
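Each of the study's three indicators has the same arithmetic shape: apply a screen of filtering conditions, then take a proportion within the survivors. That shape can be sketched generically; the predicate names and thresholds below are invented placeholders for illustration, since the actual instrument items and cutoffs are defined in the study.

```python
def filtered_share(respondents, screen, target):
    """Among respondents who pass every screening predicate,
    the share who also satisfy the target condition."""
    passed = [r for r in respondents if all(pred(r) for pred in screen)]
    if not passed:
        return 0.0
    return sum(1 for r in passed if target(r)) / len(passed)

# Hypothetical encoding of indicator (1): within a three-signal screen
# (AI skill, perceived teaching benefit, detection confidence), the
# share who would fully allow AI in exams.
SCREEN = [
    lambda r: r["ai_skill"] >= 4,
    lambda r: r["teaching_benefit"],
    lambda r: r["detection_confident"],
]
TARGET = lambda r: r["allow_ai_in_exams"]
```

Indicators (2) and (3) would reuse `filtered_share` with different screens and targets, which is what makes the instrument auditable: each indicator is a reproducible filter-then-proportion computation over the same responses.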
AI & Data Competencies: Scaffolding holistic AI literacy in Higher Education
Kennedy, Kathleen, Gupta, Anuj
This chapter introduces the AI & Data Acumen Learning Outcomes Framework, a comprehensive tool designed to guide the integration of AI literacy across higher education. Developed through a collaborative process, the framework defines key AI and data-related competencies across four proficiency levels and seven knowledge dimensions. It provides a structured approach for educators to scaffold student learning in AI, balancing technical skills with ethical considerations and sociocultural awareness. The chapter outlines the framework's development process, its structure, and practical strategies for implementation in curriculum design, learning activities, and assessment. We address challenges in implementation and future directions for AI education. By offering a roadmap for developing students' holistic AI literacy, this framework prepares learners to leverage generative AI capabilities in both academic and professional contexts.
- Europe (0.14)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Arizona (0.04)
- Education > Educational Setting > Higher Education (0.91)
- Education > Curriculum (0.88)
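A framework of four proficiency levels crossed with seven knowledge dimensions is, structurally, a 7 x 4 rubric matrix. The sketch below shows one way to represent it in code; the level and dimension names are invented placeholders, as the chapter defines the framework's actual terminology.

```python
# Placeholder names only; the AI & Data Acumen framework defines the
# real four proficiency levels and seven knowledge dimensions.
LEVELS = ["awareness", "application", "evaluation", "creation"]
DIMENSIONS = ["technical", "data", "ethical", "sociocultural",
              "critical", "creative", "professional"]

def empty_rubric():
    """One learning-outcome slot per (dimension, level) cell: 7 x 4 = 28."""
    return {(d, l): None for d in DIMENSIONS for l in LEVELS}

def progress(profile, dimension):
    """Highest consecutive level a learner has demonstrated in a
    dimension; scaffolding means levels are reached in order."""
    achieved = None
    for level in LEVELS:
        if profile.get((dimension, level)):
            achieved = level
        else:
            break
    return achieved
```

The consecutive-level walk in `progress` is the "scaffolding" assumption made explicit: a learner is credited with a level only when every level below it has also been demonstrated.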